14 research outputs found

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand and the challenge of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, the thesis consists of six studies: the first two introduce novel methods for tumor segmentation, and the last four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features with radiomic features to boost classification power. Study V focuses on the early assessment of lung tumor response to treatment by proposing a novel, physiologically interpretable feature set. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in methods developed for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
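
    A high-level illustration of the deep/radiomic feature fusion mentioned for Study IV: the hypothetical Python sketch below simply concatenates a deep-feature vector with a radiomic-feature vector per nodule before classification. The arrays are random placeholders, and the dimensions and classifier choice are assumptions, not the thesis's actual pipeline.

```python
# Minimal sketch of deep/radiomic feature fusion for nodule classification.
# Illustrative only: real deep features would come from a CNN embedding of
# each nodule patch, and radiomic features from a toolkit such as PyRadiomics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_nodules = 200

deep_features = rng.normal(size=(n_nodules, 128))     # placeholder CNN embeddings
radiomic_features = rng.normal(size=(n_nodules, 40))  # placeholder radiomic descriptors
labels = rng.integers(0, 2, size=n_nodules)           # 0 = benign, 1 = malignant

# Fusion by simple concatenation of the two feature families.
fused = np.concatenate([deep_features, radiomic_features], axis=1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, fused, labels, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC on placeholder data: {scores.mean():.2f}")
```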

    Brain Tumor Target Volume Segmentation: Local Region Based Approach

    In this paper, we comprehensively evaluated the clinical application of local region-based algorithms for delineating brain target volumes in radiation therapy treatment planning. Localized region-based algorithms can reduce the processing time of manual target tumor delineation and, owing to their high deformability, show excellent correlation with the manual delineations defined by oncologists. Accordingly, they have received much attention in radiation therapy treatment planning. First, clinical target volumes (CTVs) on 135 slices from 18 patients were manually defined by two oncologists, and the average of these contours was taken as the reference for comparison with the semi-automatic results of four different algorithms. Then, four localized region-based algorithms, namely Localizing Region-Based Active Contour (LRBAC), the Local Chan-Vese model (LCV), the Local Region Chan-Vese model (LRCV), and Local Gaussian Distribution Fitting (LGDF), were applied to outline the CTVs. Finally, the semi-automatic results were compared with the references using three metrics: the Dice coefficient, the Hausdorff distance, and the mean absolute distance. Processing times for manual delineation of the target tumors were also recorded. Our results showed that LCV has an advantage over the other algorithms in terms of processing time, with LRCV being the second fastest method. LRBAC was the second slowest technique; however, we found that the processing speed of LRBAC can be almost doubled by replacing the time-consuming re-initialization process with an energy-penalizing term. Accordingly, owing to the high accuracy of the LRBAC algorithm, it can be concluded that the modified version of LRBAC offers the best trade-off of speed and accuracy among the localized algorithms for delineating brain target volumes in radiation therapy treatment planning.
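
    The comparison metrics named above are standard segmentation measures. As a rough sketch (not the authors' implementation), the Dice coefficient and a symmetric Hausdorff distance between two binary masks can be computed as follows; the masks here are synthetic placeholders.

```python
# Illustrative Dice coefficient and Hausdorff distance between two binary
# segmentation masks (synthetic toy masks, not the study's contours).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the foreground point sets."""
    pts_a, pts_b = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy masks standing in for a manual reference and a semi-automatic contour.
manual = np.zeros((64, 64), dtype=np.uint8)
manual[20:40, 20:40] = 1
semi_auto = np.zeros((64, 64), dtype=np.uint8)
semi_auto[22:42, 18:38] = 1

print("Dice:", round(dice_coefficient(manual, semi_auto), 3))
print("Hausdorff:", round(hausdorff_distance(manual, semi_auto), 3))
```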

    Relationship between job stress and work-related quality of life among emergency medical technicians: a cross-sectional study

    Objective: This study aimed to determine the relationship between job stress and work-related quality of life (WRQoL) among emergency medical technicians (EMTs) in Lorestan province, Western Iran. Design: This was a cross-sectional study. Methods: In total, 430 EMTs who had worked in their respective units for more than 6 months were selected from all emergency facilities in Lorestan province using a single-stage cluster sampling method. Data were collected from April to July 2019 using two standard questionnaires: the Health and Safety Executive (HSE) job stress questionnaire and the WRQoL scale. Odds ratios with 95% CIs were used to assess statistical associations (p≀0.05). Results: All participants were male, with a mean age of 32±6.87 years. The overall average job stress score on the HSE scale was 2.69±0.43, while the overall quality of working life score was 2.48±1.01. The type of working shift was found to have a significant impact on the average HSE score (F(3,417)=5.26, p=0.01) and on the average WRQoL score (F(3,417)=6.89, p<0.01). Conclusion: Two-thirds of EMTs working in governmental hospitals had job stress and a low work-related quality of life. Additionally, work shift was significantly associated with EMTs’ job stress and WRQoL.
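
    The F statistics reported above correspond to one-way analyses of variance of the average scores across the four shift types. Purely as an illustration of that kind of test, and with randomly generated placeholder scores rather than the study's data, such an ANOVA could be run as follows.

```python
# Illustrative one-way ANOVA of a questionnaire score across shift groups
# (randomly generated placeholder data, not the study's measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Four hypothetical working-shift groups with placeholder HSE-style scores.
shift_groups = [rng.normal(loc=m, scale=0.4, size=100)
                for m in (2.6, 2.7, 2.8, 2.75)]

f_stat, p_value = stats.f_oneway(*shift_groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```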

    Emg-based facial gesture recognition through versatile elliptic basis function neural network

    Recently, the recognition of different facial gestures from facial neuromuscular activity has been proposed for human-machine interfacing applications. Facial electromyogram (EMG) analysis is a complicated field in biomedical signal processing where accuracy and low computational cost are significant concerns. In this paper, a very fast versatile elliptic basis function neural network (VEBFNN) was proposed to classify different facial gestures. The effectiveness of different facial EMG time-domain features was also explored to identify the most discriminating ones. Methods: In this study, EMGs of ten facial gestures were recorded from ten subjects using three pairs of surface electrodes in a bipolar configuration. The signals were filtered and segmented into distinct portions prior to feature extraction. Ten different time-domain features, namely Integrated EMG, Mean Absolute Value, Mean Absolute Value Slope, Maximum Peak Value, Root Mean Square, Simple Square Integral, Variance, Mean Value, Wave Length, and Slope Sign Changes, were extracted from the EMGs. The statistical relationships between these features were investigated using the mutual information measure. Then, feature combinations including two to ten single features were formed based on the feature rankings given by the Minimum-Redundancy-Maximum-Relevance (MRMR) and Recognition Accuracy (RA) criteria. In the last step, the VEBFNN was employed to classify the facial gestures. The effectiveness of single features as well as feature sets on system performance was examined with respect to two major metrics: recognition accuracy and training time. Finally, the proposed classifier was assessed and compared with conventional methods, namely support vector machines and a multilayer perceptron neural network. Results: The average classification results showed that the best performance for recognizing facial gestures among all single/multi-features was achieved by Maximum Peak Value, with 87.1% accuracy. Moreover, the results demonstrated a very fast procedure, since the training time during classification via the VEBFNN was 0.105 seconds. It was also indicated that MRMR was not a proper criterion for building effective feature sets in comparison with RA. Conclusions: This work introduced the most discriminating facial EMG time-domain feature for the recognition of different facial gestures and suggests the VEBFNN as a promising method for EMG-based facial gesture classification, to be used for designing interfaces in human-machine interaction systems.
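
    Most of the time-domain features listed above reduce to simple statistics over an EMG window. As a rough, hypothetical illustration, a few of them can be computed as below; the signal is synthetic, and the definitions follow common usage rather than necessarily the exact formulations used in the paper.

```python
# Illustrative time-domain EMG features over a single (synthetic) window.
import numpy as np

rng = np.random.default_rng(2)
window = rng.normal(scale=0.1, size=1024)   # placeholder EMG segment

iemg = np.sum(np.abs(window))               # Integrated EMG
mav = np.mean(np.abs(window))               # Mean Absolute Value
mpv = np.max(np.abs(window))                # Maximum Peak Value
rms = np.sqrt(np.mean(window ** 2))         # Root Mean Square
ssi = np.sum(window ** 2)                   # Simple Square Integral
wl = np.sum(np.abs(np.diff(window)))        # Wave Length

print({"IEMG": iemg, "MAV": mav, "MPV": mpv, "RMS": rms, "SSI": ssi, "WL": wl})
```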

    Preparation of NR/BR Compound Using ZnO Nanoparticles

    An elastomer compound based on NR/BR was prepared using different kinds of nano-ZnO and conventional ZnO. The effect of zinc oxide, as an activator of the sulfur cure, on the mechanical and morphological properties of the compound was studied. The mechanical test results showed that the tensile strength, elongation at break, resilience, and abrasion resistance of the compound prepared with ZnO (4 phr) are equal to those of the compounds using nano-ZnO (Iran, 1 phr) and nano-ZnO (China, 2 phr). The cure time of the compounds with nano-ZnO is shorter than that of the compound using ZnO, while the scorch time of the compounds is the same. SEM images of the samples showed that the particles of nano-ZnO (Iran) are smaller than those of ZnO and nano-ZnO (China) and are also uniformly dispersed.

    Robust facial expression recognition for MuCI: A comprehensive neuromuscular signal analysis

    This paper presents a comprehensive study on the analysis of neuromuscular signal activity to recognize 11 facial expressions for muscle-computer interfacing (MuCI) applications. A robust denoising protocol comprising the wavelet transform and Kalman filtering is proposed to enhance the electromyogram (EMG) signal-to-noise ratio and improve classification performance. The effectiveness of eight different time-domain facial EMG features on system performance is examined and compared in order to identify the most discriminative one. Fourteen pattern-recognition algorithms are employed to classify the extracted features. These classifiers are evaluated in terms of classification accuracy and processing time. Finally, the best methods, which obtain almost identical system performance, are compared through the Normalized Mutual Information (NMI) criterion and a repeated-measures analysis of variance (ANOVA) as a statistical significance test. To clarify the impact of signal denoising, all considered EMG features and classifiers are assessed with and without this stage. Results show that: (1) the proposed denoising step significantly improves system performance; (2) root mean square is the most discriminative facial EMG feature; (3) discriminant analysis with parameters estimated by the maximum likelihood algorithm achieves the highest classification accuracy and NMI; however, ANOVA reveals no significant difference among the best methods, which have almost similar performance.
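
    The wavelet stage of the denoising protocol mentioned above can be pictured as decomposing the signal, shrinking the detail coefficients, and reconstructing. The sketch below shows generic wavelet soft-threshold denoising of a synthetic 1-D signal; the wavelet family, level, and threshold rule are assumptions, and the paper's protocol additionally includes a Kalman filtering stage not shown here.

```python
# Rough sketch of wavelet soft-threshold denoising of a 1-D EMG-like signal
# (generic choices of wavelet, level, and threshold; not the paper's protocol,
# which also applies Kalman filtering).
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t)   # placeholder "signal"
noisy = clean + rng.normal(scale=0.2, size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
# Universal threshold estimated from the finest detail coefficients.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2 * np.log(noisy.size))
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                 for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")[: clean.size]

print("Noisy RMSE:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("Denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```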